Fourier measurement
Super-Resolution Off the Grid
Qingqing Huang, Sham M. Kakade
Super-resolution is the problem of recovering a superposition of point sources using bandlimited measurements, which may be corrupted with noise. This signal processing problem arises in numerous imaging problems, ranging from astronomy to biology to spectroscopy, where it is common to take (coarse) Fourier measurements of an object. Of particular interest is obtaining estimation procedures which are robust to noise, with the following desirable statistical and computational properties: we seek to use coarse Fourier measurements (bounded by some \emph{cutoff frequency}); we hope to take a (quantifiably) small number of measurements; we desire our algorithm to run quickly. Suppose we have $k$ point sources in $d$ dimensions, where the points are separated by at least $\Delta$ from each other (in Euclidean distance). This work provides an algorithm with the following favorable guarantees: 1. The algorithm uses Fourier measurements whose frequencies are bounded by $O(1/\Delta)$ (up to log factors).
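To make the measurement model concrete, the following sketch generates bandlimited Fourier measurements of a superposition of point sources, with frequencies bounded by a cutoff on the order of $1/\Delta$. All variable names and values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# k point sources in d dimensions, pairwise separated by at least Delta.
k, d, Delta = 3, 2, 0.2
points = np.array([[0.1, 0.1], [0.1, 0.5], [0.6, 0.3]])  # separations > Delta
weights = np.array([1.0, 0.5, 2.0])

def fourier_measurement(s, points, weights, noise=0.0):
    """One bandlimited measurement f(s) = sum_j w_j * exp(2*pi*i <s, x_j>) + noise."""
    phases = np.exp(2j * np.pi * (points @ s))
    return weights @ phases + noise * (rng.standard_normal() + 1j * rng.standard_normal())

# Frequencies bounded by a cutoff on the order of 1/Delta, as in the abstract.
cutoff = int(np.ceil(1.0 / Delta))
freqs = [np.array([m, n]) for m in range(-cutoff, cutoff + 1)
                          for n in range(-cutoff, cutoff + 1)]
measurements = np.array([fourier_measurement(s, points, weights) for s in freqs])
print(len(measurements))  # number of coarse Fourier samples taken
```

The zero frequency simply returns the sum of the weights, which is a quick sanity check on the model.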
New Developments in the Field of Probability, Part 1
Abstract: We study the existence and regularity of local times for general d-dimensional stochastic processes. We give a general condition for their existence and regularity properties. To emphasize the contribution of our results, we show that they include various prominent examples, among others solutions to stochastic differential equations driven by fractional Brownian motion, where the behavior of the local time was not fully understood up to now and remained an open problem in the stochastic analysis literature. In particular, this completes the picture regarding the local time behavior of such equations, covering above all high dimensions and both large and small Hurst parameters. As other main examples, we also show that our general approach quite easily covers and extends some recently obtained results on the local times of the Rosenblatt process and Gaussian quasi-helices.
Generalized sampling with functional principal components for high-resolution random field estimation
In this paper, we take a statistical approach to the problem of recovering a function from low-resolution measurements taken with respect to an arbitrary basis, by regarding the function of interest as a realization of a random field. We introduce an infinite-dimensional framework for high-resolution estimation of a random field from its low-resolution indirect measurements as well as the high-resolution measurements of training observations by merging the existing frameworks of generalized sampling and functional principal component analysis. We study the statistical performance of the resulting estimation procedure and show that high-resolution recovery is indeed possible provided appropriate low-rank and angle conditions hold and provided the training set is sufficiently large relative to the desired resolution. We also consider sparse representations of the principal components, which can reduce the required size of the training set. Furthermore, the effectiveness of the proposed procedure is investigated in various numerical examples.
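The pipeline described above can be sketched in a simplified finite-dimensional stand-in: learn principal components from high-resolution training observations, then fit their coefficients to a new field's low-resolution measurements. Every name and value below is an assumption (block-average measurements, a synthetic smooth random field), not the paper's actual framework:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: high-resolution fields of length N, observed only through
# M << N low-resolution block averages; r principal components are retained.
N, M, n_train, r = 64, 8, 200, 5

# Low-resolution measurement operator: block averages over N // M samples each.
L = np.kron(np.eye(M), np.full((1, N // M), M / N))

# High-resolution training observations of the random field: a smooth
# low-rank model (random combinations of a few sines) plus small noise.
basis = np.array([np.sin(np.pi * (j + 1) * np.linspace(0, 1, N)) for j in range(r)]).T
train = basis @ rng.standard_normal((r, n_train)) + 0.01 * rng.standard_normal((N, n_train))

# Functional PCA step: top-r principal components of the centered training set.
mean = train.mean(axis=1, keepdims=True)
U, _, _ = np.linalg.svd(train - mean, full_matrices=False)
V = U[:, :r]

# High-resolution estimation step: fit component coefficients to the
# low-resolution measurements of a new realization.
field = basis @ rng.standard_normal(r)   # new realization of the field
y = L @ field                            # its low-resolution measurements
coef, *_ = np.linalg.lstsq(L @ V, y - (L @ mean).ravel(), rcond=None)
estimate = mean.ravel() + V @ coef
```

The least-squares fit is well posed here because the measurement operator restricted to the learned component subspace (`L @ V`, an 8-by-5 matrix) has full column rank, a toy analogue of the angle condition mentioned in the abstract.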
Deep S$^3$PR: Simultaneous Source Separation and Phase Retrieval Using Deep Generative Models
Metzler, Christopher A., Wetzstein, Gordon
This paper introduces and solves the simultaneous source separation and phase retrieval (S$^3$PR) problem. S$^3$PR shows up in a number application domains, most notably computational optics, where one has multiple independent coherent sources whose phase is difficult to measure. In general, S$^3$PR is highly under-determined, non-convex, and difficult to solve. In this work, we demonstrate that by restricting the solutions to lie in the range of a deep generative model, we can constrain the search space sufficiently to solve S$^3$PR.
Degrees of freedom for off-the-grid sparse estimation
Clarice Poon, Gabriel Peyré. November 12, 2019. Abstract: A central question in modern machine learning and imaging sciences is to quantify the number of effective parameters of vastly over-parameterized models. The degrees of freedom is a mathematically convenient way to define this number of parameters. Its computation and properties are well understood when dealing with discretized linear models, possibly regularized using sparsity. In this paper, we argue that this way of thinking is ill-suited when dealing with models having very large parameter spaces. In this case it makes more sense to consider "off-the-grid" approaches, using a continuous parameter space. This type of approach is the one favoured when training multi-layer perceptrons, and is also becoming popular to solve super-resolution problems in imaging. Training these off-the-grid models with a sparsity-inducing prior can be achieved by solving a convex optimization problem over the space of measures, which is often called the Beurling Lasso (Blasso), and is the continuous counterpart of the celebrated Lasso parameter selection method. In previous works [41, 19], the degrees of freedom for the Lasso was shown to coincide with the size of the smallest solution support. Our main contribution is a proof of a continuous counterpart to this result for the Blasso. While in dimension d each of the k nonzero atoms in the recovered measure carries d+1 parameters (d for the position and 1 for the weight), a surprising implication of our new formula is that the degrees of freedom for these off-the-grid models is in general strictly smaller than (d+1)k. Our findings thus suggest that discretized methods actually vastly overestimate the number of intrinsic continuous degrees of freedom. Our second contribution is a detailed study of the case of sampling Fourier coefficients in 1D, which corresponds to a super-resolution problem.
We show that our formula for the degrees of freedom is valid outside of a set of measure zero of observations, which in turn justifies its use to compute an unbiased estimator of the prediction risk using the Stein Unbiased Risk Estimator (SURE). We also report numerical results for both the case of Fourier sampling and the learning of a multilayer perceptron with a single hidden layer.
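The discrete result quoted above — the Lasso's degrees of freedom coincides with the solution's support size — can be sanity-checked numerically in the simplest orthonormal-design case, where the Lasso solution reduces to coordinatewise soft-thresholding and Stein's covariance identity df = (1/sigma^2) sum_i Cov(mu_i(y), y_i) can be estimated by Monte Carlo. This is a hypothetical toy setup, not the paper's Blasso setting:

```python
import numpy as np

rng = np.random.default_rng(2)

# Orthonormal design: the Lasso solution is coordinatewise soft-thresholding of y.
def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

n, lam, sigma = 50, 1.0, 1.0
beta = np.zeros(n)
beta[:5] = 3.0                    # sparse ground truth with 5 strong coordinates

n_mc = 4000
Y = beta + sigma * rng.standard_normal((n_mc, n))   # y ~ N(beta, sigma^2 I)
Mu = soft_threshold(Y, lam)                         # Lasso fit for each draw

# Degrees of freedom, two ways:
# (a) average support size of the Lasso solution, per the result of [41, 19];
df_support = np.count_nonzero(Mu, axis=1).mean()
# (b) Stein's covariance identity, estimated coordinate by coordinate.
df_stein = sum(np.cov(Mu[:, i], Y[:, i])[0, 1] for i in range(n)) / sigma**2
print(df_support, df_stein)   # the two estimates agree up to Monte Carlo error
```

The agreement of (a) and (b) is exactly what licenses plugging the support size into SURE to get an unbiased estimate of the prediction risk.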
prDeep: Robust Phase Retrieval with Flexible Deep Neural Networks
Metzler, Christopher A., Schniter, Philip, Veeraraghavan, Ashok, Baraniuk, Richard G.
Phase retrieval (PR) algorithms have become an important component in many modern computational imaging systems. For instance, in the context of ptychography and speckle correlation imaging, PR algorithms enable imaging past the diffraction limit and through scattering media, respectively. Unfortunately, traditional PR algorithms struggle in the presence of noise. Recently PR algorithms have been developed that use priors to make themselves more robust. However, these algorithms often require unrealistic (Gaussian or coded diffraction pattern) measurement models and offer slow computation times. These drawbacks have hindered widespread adoption. In this work we use convolutional neural networks, a powerful tool from machine learning, to regularize phase retrieval problems and improve recovery performance. We test our new algorithm, prDeep, in simulation and demonstrate that it is robust to noise, can handle a variety of system models, and operates fast enough for high-resolution applications.
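For context, the generic intensity-measurement model these PR algorithms attack fits in a few lines. The sketch below solves a toy Gaussian instance with spectral initialization plus plain gradient descent (a Wirtinger-flow-style baseline, not prDeep itself; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy phase retrieval instance: recover x from intensities y_i = |<a_i, x>|^2.
n, m = 10, 200
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))           # Gaussian measurement vectors
y = (A @ x_true) ** 2                     # noiseless intensity measurements

def loss(x):
    """Quartic intensity loss (1/4m) * sum_i ((<a_i, x>)^2 - y_i)^2."""
    return np.mean(((A @ x) ** 2 - y) ** 2) / 4

# Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T,
# scaled to match the average measured energy.
w, V = np.linalg.eigh((A.T * y) @ A / m)
x0 = V[:, -1] * np.sqrt(np.mean(y))

# Plain gradient descent on the intensity loss.
x = x0.copy()
step = 0.1 / np.mean(y)
for _ in range(500):
    Ax = A @ x
    grad = (A.T @ ((Ax ** 2 - y) * Ax)) / m
    x = x - step * grad
```

With noiseless Gaussian measurements and a good initialization this baseline recovers x up to a global sign; the abstract's point is precisely that such guarantees degrade under noise and realistic (non-Gaussian) measurement models, which is the regime prDeep targets.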